Results 1 - 7 of 7
1.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20240716

ABSTRACT

This paper proposes an automated classification method for COVID-19 chest CT volumes using an improved 3D MLP-Mixer. Novel coronavirus disease 2019 (COVID-19) has spread across the world, causing a large number of infections and deaths. The sudden increase in the number of COVID-19 patients causes a manpower shortage in medical institutions. A computer-aided diagnosis (CAD) system provides quick and quantitative diagnosis results. A CAD system for COVID-19 enables an efficient diagnosis workflow and helps reduce this manpower shortage. In image-based diagnosis of viral pneumonia cases, including COVID-19, both local and global image features are important because viral pneumonia causes many ground-glass opacities and consolidations over large areas of the lung. MLP-Mixer is a recent image classification method with a Vision Transformer-like architecture that performs classification using both local and global image features. To classify 3D CT volumes, we developed a hybrid classification model consisting of both a 3D convolutional neural network (CNN) and a 3D version of the MLP-Mixer. The classification accuracy of the proposed method was evaluated on a dataset containing 1205 CT volumes, achieving 79.5% classification accuracy. This accuracy was higher than that of conventional 3D CNN models consisting of 3D CNN layers and simple MLP layers. © 2023 SPIE.
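The core of an MLP-Mixer block is a token-mixing MLP (applied across patches) followed by a channel-mixing MLP (applied across features), each with a residual connection. The sketch below is a minimal NumPy illustration of that idea for flattened 3D-patch embeddings, not the authors' implementation; the activation, normalization, and weight shapes are simplifying assumptions.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP applied along the last axis (ReLU used for simplicity;
    the paper's activation may differ)."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

def mixer_block(tokens, tw1, tb1, tw2, tb2, cw1, cb1, cw2, cb2):
    """One Mixer block over tokens of shape (num_patches, channels):
    token mixing across the patch axis, then channel mixing per patch."""
    # Token mixing: transpose so the MLP acts across the patch dimension.
    tokens = tokens + mlp(tokens.T, tw1, tb1, tw2, tb2).T
    # Channel mixing: the MLP acts across the feature dimension of each patch.
    tokens = tokens + mlp(tokens, cw1, cb1, cw2, cb2)
    return tokens
```

Token mixing lets every 3D patch see every other patch (global context), while channel mixing refines each patch's features locally, which matches the paper's stated need for both local and global image features.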

2.
12th International Conference on Electrical and Computer Engineering, ICECE 2022 ; : 248-251, 2022.
Article in English | Scopus | ID: covidwho-2290742

ABSTRACT

Right at the end of 2019, the world saw an outbreak of a new type of severe acute respiratory syndrome (SARS) disease, SARS-CoV-2, or COVID-19. Even in 2022, around 1 million people worldwide were getting infected with the virus every day. To date, more than 6 million people have died as a result of the virus. To tackle the pandemic, the first step is to successfully detect the virus in the general population. The most popular method is the RT-PCR test, which, unfortunately, is not always conclusive. Physicians therefore suggest lung CT tests for patients for clinical relevance. The problem with using lung CT scans to detect coronavirus is that a COVID-19-infected scan is very similar to a community-acquired pneumonia (CAP) scan, and the results in many cases are wrongly interpreted. In addition, the virus is always mutating into different strains, and the severity and infection pattern change slightly with each mutation. Because of this rapid mutation, a large and balanced dataset of lung CT scans is not always available. In this work, we systematically evaluate the accuracy of a deep 3D convolutional neural network (CNN) on a small-scale and highly imbalanced dataset of lung CT scans (the SPGC COVID 2021 dataset). Our experiments show that it can outperform previous state-of-the-art 3D CNN models with proper regularization, an appropriate number of dense layers, and a weighted loss function. Our research therefore suggests an effective deep-learning solution for identifying COVID-19 in lung CT scans from small and highly imbalanced datasets. © 2022 IEEE.
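The weighted loss function mentioned above is a standard remedy for class imbalance: misclassifying a rare class is penalized more heavily. A minimal NumPy sketch of class-weighted cross-entropy, under the assumption that per-class weights are supplied directly (the paper's exact weighting scheme is not specified here):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Class-weighted cross-entropy for imbalanced data.
    probs: (N, K) predicted class probabilities; labels: (N,) int class ids;
    class_weights: (K,) weights, typically larger for rarer classes."""
    eps = 1e-12  # guard against log(0)
    n = len(labels)
    picked = probs[np.arange(n), labels]   # probability assigned to the true class
    w = class_weights[labels]              # weight of each sample's true class
    return float(np.mean(-w * np.log(picked + eps)))
```

Setting the weight of a minority class higher than the majority class makes errors on rare cases dominate the gradient, which is what lets a model train usefully on a highly imbalanced CT dataset.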

3.
IEEE Trans Artif Intell ; 3(2): 129-138, 2022 Apr.
Article in English | MEDLINE | ID: covidwho-1948841

ABSTRACT

Amidst the ongoing pandemic, the assessment of computed tomography (CT) images for COVID-19 presence can exceed the workload capacity of radiologists. Several studies addressed this issue by automating COVID-19 classification and grading from CT images with convolutional neural networks (CNNs). Many of these studies reported initial results of algorithms that were assembled from commonly used components. However, the choice of the components of these algorithms was often pragmatic rather than systematic and systems were not compared to each other across papers in a fair manner. We systematically investigated the effectiveness of using 3-D CNNs instead of 2-D CNNs for seven commonly used architectures, including DenseNet, Inception, and ResNet variants. For the architecture that performed best, we furthermore investigated the effect of initializing the network with pretrained weights, providing automatically computed lesion maps as additional network input, and predicting a continuous instead of a categorical output. A 3-D DenseNet-201 with these components achieved an area under the receiver operating characteristic curve of 0.930 on our test set of 105 CT scans and an AUC of 0.919 on a publicly available set of 742 CT scans, a substantial improvement in comparison with a previously published 2-D CNN. This article provides insights into the performance benefits of various components for COVID-19 classification and grading systems. We have created a challenge on grand-challenge.org to allow for a fair comparison between the results of this and future research.
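The AUC values reported above can be computed from raw prediction scores via the Mann-Whitney U statistic: the probability that a random positive case is scored above a random negative one. This is an illustrative sketch, not the study's evaluation code:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    scores: (N,) predicted positive-class scores; labels: (N,) in {0, 1}."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairs where the positive case outranks the negative; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))
```

An AUC of 0.930, as reported for the 3D DenseNet-201, means a randomly chosen COVID-19-positive scan receives a higher score than a randomly chosen negative scan 93% of the time.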

4.
Comput Biol Med ; 145: 105464, 2022 06.
Article in English | MEDLINE | ID: covidwho-1768009

ABSTRACT

BACKGROUND: Artificial intelligence technologies for classification/detection of COVID-19-positive cases suffer from poor generalizability. Moreover, accessing and preparing another large dataset is not always feasible and is time-consuming. Several studies have combined smaller COVID-19 CT datasets into "supersets" to maximize the number of training samples. This study aims to assess generalizability by splitting datasets into different portions based on 3D CT images using deep learning. METHOD: Two large datasets, comprising 1110 3D CT images, were split into five segments of 20% each. The first 20% segment of each dataset was set aside as a holdout test set. 3D-CNN training was performed with the remaining 80% from each dataset. Two small external datasets were also used to independently evaluate the trained models. RESULTS: The combination of 80% of each dataset achieved an accuracy of 91% on the Iranmehr and 83% on the Moscow holdout test datasets. Results indicated that 80% of the primary datasets is adequate for fully training a model. Additional fine-tuning using 40% of a secondary dataset helps the model generalize to a third, unseen dataset. The highest accuracy achieved through transfer learning was 85% on the LDCT dataset and 83% on the Iranmehr holdout test set when retrained on 80% of the Iranmehr dataset. CONCLUSION: While the combination of both datasets produced the best results, different combinations and transfer learning still produced generalizable results. Adopting the proposed methodology may help to obtain satisfactory results in the case of limited external datasets.


Subject(s)
COVID-19 , Deep Learning , Artificial Intelligence , COVID-19/diagnostic imaging , Humans , Neural Networks, Computer , Tomography, X-Ray Computed/methods
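The splitting scheme in the study above (five 20% segments per dataset, with the first segment held out) can be sketched in plain Python. The function below is an assumed reading of that protocol, not the authors' code; in particular, it drops any remainder when the dataset size is not divisible by five:

```python
def holdout_split(ids, holdout_frac=0.2):
    """Split one dataset's scan ids: the first 20% becomes a fixed holdout
    test set, and the remaining 80% is cut into four 20% training segments."""
    n = len(ids)
    k = int(n * holdout_frac)
    holdout, rest = ids[:k], ids[k:]
    seg = max(1, len(rest) // 4)
    # Any remainder beyond four equal segments is discarded (a simplification).
    segments = [rest[i * seg:(i + 1) * seg] for i in range(4)]
    return holdout, segments
```

Keeping the holdout segment fixed per dataset is what makes accuracies comparable across the different 20%/40%/80% training combinations the study evaluates.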
5.
Diagnostics (Basel) ; 11(11)2021 Oct 20.
Article in English | MEDLINE | ID: covidwho-1480630

ABSTRACT

(1) Background: COVID-19 has become a global epidemic. This work aims to extract 3D infection regions from COVID-19 CT images. (2) Methods: First, COVID-19 CT images are processed with lung region extraction and data enhancement. In this strategy, gradient changes of voxels in different directions respond to geometric characteristics. Because of the complexity of tubular tissues in the lung region, they are clustered toward the lung parenchyma center based on their filtered possibility. Thus, the infection regions are improved after data enhancement. Then, a deep weighted UNet is established to refine the 3D infection texture, and a weighted loss function is introduced. It changes the cost calculation of different samples, causing target samples to dominate the convergence direction. Finally, the trained network effectively extracts 3D infection regions from CT images by adjusting the driving strategy of different samples. (3) Results: Using Accuracy, Precision, Recall and Coincidence rate, 20 subjects from a private dataset and eight subjects from the Kaggle Competition COVID-19 CT dataset were used to test this method in a hold-out validation framework. This work achieved good performance on both the private dataset (99.94 ± 0.02%, 60.42 ± 11.25%, 70.79 ± 9.35% and 63.15 ± 8.35%) and the public dataset (99.73 ± 0.12%, 77.02 ± 6.06%, 41.23 ± 8.61% and 52.50 ± 8.18%). We also applied some extra indicators to test the data augmentation and different models. Statistical tests verified the significant differences between models. (4) Conclusions: This study provides a COVID-19 infection segmentation technology, which is an important prerequisite for the quantitative analysis of COVID-19 CT images.
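The four metrics reported above can all be derived from the voxel-wise confusion counts of a predicted mask against a ground-truth mask. A minimal NumPy sketch follows; interpreting "Coincidence rate" as the Dice coefficient is an assumption, since the abstract does not define it:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise accuracy, precision, recall and Dice coefficient
    (one plausible reading of 'coincidence rate') for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # infection voxels correctly found
    fp = np.sum(pred & ~truth)   # healthy voxels marked as infection
    fn = np.sum(~pred & truth)   # infection voxels missed
    tn = np.sum(~pred & ~truth)  # healthy voxels correctly ignored
    acc = (tp + tn) / pred.size
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    return float(acc), float(prec), float(rec), float(dice)
```

Note how accuracy near 99.9% coexists with much lower precision and recall in the reported results: infection voxels are a tiny fraction of the lung volume, so the overlap-based metrics are the informative ones.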

6.
Inform Med Unlocked ; 26: 100709, 2021.
Article in English | MEDLINE | ID: covidwho-1373080

ABSTRACT

The novel COVID-19 is a global pandemic disease spreading rapidly worldwide. Computer-aided screening tools with greater sensitivity are imperative for disease diagnosis and prognosis as early as possible. They can also be helpful in triage for testing and in the clinical supervision of COVID-19 patients. However, designing such an automated tool from non-invasive radiographic images is challenging, as many manually annotated datasets are not yet publicly available, and these are the essential core requirement of supervised learning schemes. This article proposes a 3D Convolutional Neural Network (CNN)-based classification approach considering both inter- and intra-slice spatial voxel information. The proposed system is trained end-to-end on 3D patches from whole volumetric Computed Tomography (CT) images to enlarge the number of training samples, with ablation studies performed on patch size determination. We integrate progressive resizing, segmentation, augmentation, and class rebalancing into our 3D network. Segmentation is a critical prerequisite step for COVID-19 diagnosis, enabling the classifier to learn prominent lung features while excluding the outer lung regions of the CT scans. We evaluate all the extensive experiments on a publicly available dataset named MosMed, which has binary- and multi-class chest CT image partitions. Our experimental results are very encouraging, yielding areas under the Receiver Operating Characteristic (ROC) curve of 0.914 ± 0.049 and 0.893 ± 0.035 for the binary- and multi-class tasks, respectively, applying 5-fold cross-validation. Our method's promising results recommend it as a favorable aiding tool for clinical practitioners and radiologists to assess COVID-19.
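Training on 3D patches rather than whole volumes, as described above, multiplies the number of training samples per scan. A minimal NumPy sketch of cubic patch extraction with a sliding window; the cubic window, single stride, and lack of padding are simplifying assumptions:

```python
import numpy as np

def extract_patches_3d(volume, patch_size, stride):
    """Slide a cubic window over a CT volume and collect 3D patches,
    enlarging the effective number of training samples."""
    d, h, w = volume.shape
    p = patch_size
    patches = []
    for z in range(0, d - p + 1, stride):
        for y in range(0, h - p + 1, stride):
            for x in range(0, w - p + 1, stride):
                patches.append(volume[z:z + p, y:y + p, x:x + p])
    return np.stack(patches)
```

A stride smaller than the patch size yields overlapping patches and even more samples, at the cost of correlated training data; the paper's ablation over patch size explores exactly this kind of trade-off.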

7.
Front Med (Lausanne) ; 7: 612962, 2020.
Article in English | MEDLINE | ID: covidwho-1082575

ABSTRACT

A three-dimensional (3D) deep learning method is proposed, which enables the rapid diagnosis of coronavirus disease 2019 (COVID-19) and thus significantly reduces the burden on radiologists and physicians. Inspired by the fact that current chest computed tomography (CT) datasets are diversified in equipment types, we propose a COVID-19 graph in a graph convolutional network (GCN) to incorporate multiple datasets that differentiate COVID-19-infected cases from normal controls. Specifically, we first apply a 3D convolutional neural network (3D-CNN) to extract image features from the initial 3D CT images. In this part, a transfer learning method is proposed to improve performance, which uses the task of predicting equipment type to initialize the parameters of the 3D-CNN structure. Second, we design a COVID-19 graph in the GCN based on the extracted features. The graph divides all samples into several clusters, where samples with the same equipment type compose a cluster. We then establish edge connections between samples in the same cluster. To compute accurate edge weights, we propose combining the correlation distance of the extracted features with the score differences of subjects from the 3D-CNN structure. Lastly, by inputting the COVID-19 graph into the GCN, we obtain the final diagnosis results. In our experiments, the dataset contains 399 COVID-19-infected cases and 400 normal controls from six equipment types. Experimental results show that the accuracy, sensitivity, and specificity of our method reach 98.5%, 99.9%, and 97%, respectively.
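The edge-weight idea above, combining feature correlation distance with CNN score differences, can be sketched for a single pair of same-scanner samples. The mixing coefficient `alpha` and the exponential combination rule are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def edge_weight(feat_a, feat_b, score_a, score_b, alpha=0.5):
    """Edge weight between two same-equipment-type samples: correlation
    distance of their 3D-CNN features blended with their score difference."""
    fa = feat_a - feat_a.mean()
    fb = feat_b - feat_b.mean()
    corr = float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))
    corr_dist = 1.0 - corr               # 0 when features are perfectly correlated
    score_diff = abs(score_a - score_b)  # similar CNN scores -> stronger edge
    return float(np.exp(-(alpha * corr_dist + (1 - alpha) * score_diff)))
```

Samples with similar features and similar CNN scores get weights near 1, so the GCN propagates labels most strongly between scans that the 3D-CNN already regards as alike, within each equipment-type cluster.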
